Penalized maximum-likelihood estimation of covariance matrices with linear structure

Author

  • Timothy J. Schulz
Abstract

In this paper, a space-alternating generalized expectation-maximization (SAGE) algorithm is presented for the numerical computation of maximum-likelihood (ML) and penalized maximum-likelihood (PML) estimates of the parameters of covariance matrices with linear structure for complex Gaussian processes. By using a less informative hidden-data space and a sequential parameter-update scheme, a SAGE-based algorithm is derived for which convergence of the likelihood is demonstrated to be significantly faster than that of an EM-based algorithm that has been previously proposed. In addition, the SAGE procedure is shown to easily accommodate penalty functions, and a SAGE-based algorithm is derived and demonstrated for forming PML estimates with a quadratic smoothness penalty.

List of figures:

  • Fig. 3: Maximum-likelihood estimates obtained by applying 500 iterations of the EM- and SAGE-based algorithms: true parameters (dashed); EM or SAGE estimate (solid).
  • Fig. 4: Comparison of modified log-likelihood versus iterations for the EM- and SAGE-based algorithms.
  • Fig. 5: Conditional Fisher information as a function of … for various noise-to-signal ratios.
  • Fig. 6: Maximum-likelihood estimates obtained by applying 100 iterations of the EM- and SAGE-based algorithms: true parameters (dashed); EM or SAGE estimate (solid).
  • Fig. 7: Comparison of modified log-likelihood versus iterations for the EM- and SAGE-based algorithms.
  • Fig. 8: Maximum-likelihood parameter estimates obtained by applying 10 and 20 iterations of the SAGE-based algorithm with … = 1: true parameters (dashed); SAGE estimate (solid).
  • Fig. 9: PML estimates obtained by applying the SAGE-based algorithm for various values of …: true parameters (dashed); SAGE estimates (solid).
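As a minimal sketch of the model the abstract refers to: a covariance matrix with linear structure can be written as Σ(θ) = Σ_k θ_k B_k for known basis matrices B_k and unknown coefficients θ_k. The example below (plain NumPy, real-valued for simplicity rather than the complex Gaussian case of the paper, with a hypothetical Toeplitz basis chosen for illustration) constructs such a Σ(θ) and evaluates the Gaussian log-likelihood that ML and PML procedures maximize; it is not the SAGE algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical basis for a linearly structured covariance:
# Sigma(theta) = sum_k theta_k * B_k.  Here each B_k places ones on the
# k-th super- and sub-diagonal, so theta parameterizes a symmetric
# Toeplitz covariance -- one common example of linear structure.
n = 4

def toeplitz_basis(n):
    bases = []
    for k in range(n):
        B = np.zeros((n, n))
        idx = np.arange(n - k)
        B[idx, idx + k] = 1.0
        B[idx + k, idx] = 1.0
        bases.append(B)
    return bases

bases = toeplitz_basis(n)
theta = np.array([2.0, 0.8, 0.3, 0.1])          # diagonally dominant -> PD
Sigma = sum(t * B for t, B in zip(theta, bases))

def loglik(Sigma, X):
    """Gaussian log-likelihood of i.i.d. zero-mean rows of X under Sigma."""
    m, d = X.shape                  # m samples of dimension d
    S = X.T @ X / m                 # sample covariance
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * m * (d * np.log(2 * np.pi) + logdet
                       + np.trace(np.linalg.solve(Sigma, S)))

X = rng.multivariate_normal(np.zeros(n), Sigma, size=500)
print(loglik(Sigma, X))             # higher than under a mismatched model
```

An ML procedure such as EM or SAGE would iteratively update θ to increase this log-likelihood; a PML procedure adds a penalty (here, a quadratic smoothness term on θ) to the objective.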


Similar articles

Penalized estimation of covariance matrices with flexible amounts of shrinkage

Penalized maximum likelihood estimation has been advocated for its capability to yield substantially improved estimates of covariance matrices, but so far only cases with equal numbers of records have been considered. We show that a generalization of the inverse Wishart distribution can be utilised to derive penalties which allow for differential penalization for different blocks of the matrice...


ℓ0 Sparse Inverse Covariance Estimation

Recently, there has been a focus on penalized log-likelihood covariance estimation for sparse inverse covariance (precision) matrices. The penalty is responsible for inducing sparsity, and a very common choice is the convex ℓ1 norm. However, the best estimator performance is not always achieved with this penalty. The most natural sparsity-promoting “norm” is the non-convex ℓ0 penalty, but its lack ...
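The ℓ1-versus-ℓ0 contrast drawn in this abstract shows up concretely in the proximal operators that penalized-likelihood solvers apply entrywise: soft-thresholding for ℓ1 and hard-thresholding for ℓ0. A minimal standalone sketch, not tied to any particular precision-matrix solver:

```python
import numpy as np

def soft_threshold(x, lam):
    """Prox of lam * |x| (l1 penalty): shrinks every entry toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    """Prox of (lam**2 / 2) * 1{x != 0} (l0 penalty): keeps large
    entries unchanged (unbiased) and zeroes out small ones."""
    return np.where(np.abs(x) > lam, x, 0.0)

x = np.array([-3.0, -0.5, 0.2, 1.5])
print(soft_threshold(x, 1.0))   # -> [-2.   0.   0.   0.5]
print(hard_threshold(x, 1.0))   # -> [-3.   0.   0.   1.5]
```

The bias visible in the soft-threshold output (large entries are pulled toward zero) is one reason, as the abstract notes, that the convex ℓ1 penalty does not always give the best estimator performance.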


Variable Selection for Joint Mean and Covariance Models via Penalized Likelihood

In this paper, we propose a penalized maximum likelihood method for variable selection in joint mean and covariance models for longitudinal data. Under certain regularity conditions, we establish the consistency and asymptotic normality of the penalized maximum likelihood estimators of parameters in the models. We further show that the proposed estimation method can correctly identify the true ...


Penalized Bregman Divergence Estimation via Coordinate Descent

Variable selection via penalized estimation is appealing for dimension reduction. For penalized linear regression, Efron et al. (2004) introduced the LARS algorithm. Recently, the coordinate descent (CD) algorithm was developed by Friedman et al. (2007) for penalized linear regression and penalized logistic regression and was shown to gain computational superiority. This paper explores...


Explicit estimators of parameters in the Growth Curve model with linearly structured covariance matrices

Estimation of parameters in the classical Growth Curve model, when the covariance matrix has some specific linear structure, is considered. In our examples the maximum likelihood estimators cannot be obtained explicitly, and one must rely on optimization algorithms. Therefore, explicit estimators are obtained as alternatives to the maximum likelihood estimators. From a discussion about residuals, a simp...
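To illustrate what an explicit (non-iterative) estimator for a linearly structured covariance can look like, the sketch below fits Σ(θ) = Σ_k θ_k B_k to the sample covariance by least squares, which has a closed-form solution. This is only a generic projection idea under a hypothetical two-basis structure, not the estimator derived in the cited paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical compound-symmetry structure: Sigma = a*I + b*(J - I),
# i.e. common variance a and common covariance b.
n = 3
B = [np.eye(n),                         # common-variance basis
     np.ones((n, n)) - np.eye(n)]       # common-covariance basis
true_theta = np.array([1.5, 0.4])
Sigma = sum(t * Bk for t, Bk in zip(true_theta, B))

X = rng.multivariate_normal(np.zeros(n), Sigma, size=2000)
S = X.T @ X / len(X)                    # sample covariance

# Explicit estimator: minimize ||S - sum_k theta_k B_k||_F over theta.
# Vectorize each basis as a column and solve the linear least-squares
# problem in closed form.
A = np.column_stack([Bk.ravel() for Bk in B])
theta_hat, *_ = np.linalg.lstsq(A, S.ravel(), rcond=None)
print(theta_hat)                        # near [1.5, 0.4] for large samples
```

Because the structure is linear in θ, the fit reduces to ordinary least squares and needs no iterative optimization, which is the practical appeal of explicit estimators of this kind.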



Journal:
  • IEEE Trans. Signal Processing

Volume 45, Issue 

Pages  -

Publication year: 1997